Countermeasures Against Adversarial Examples in Radio Signal Classification

Authors

Abstract

Deep learning algorithms have been shown to be powerful in many communication network design problems, including that of automatic modulation classification. However, they are vulnerable to carefully crafted attacks called adversarial examples. Hence, the reliance of wireless networks on deep learning poses a serious threat to the security and operation of those networks. In this letter, we propose for the first time a countermeasure against adversarial examples. Our countermeasure is based on a neural rejection technique, augmented by label smoothing and Gaussian noise injection, which allows it to detect and reject adversarial examples with high accuracy. Our results demonstrate that the proposed countermeasure can protect deep-learning based modulation classification systems against adversarial examples.
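The countermeasure described above combines three ingredients: label smoothing at training time, Gaussian noise injection as augmentation, and a rejection rule at inference time. The following is a minimal illustrative sketch of those three pieces; all function names, the smoothing factor, the noise level, and the rejection threshold are illustrative assumptions, not the paper's actual model or parameters.

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    # Label smoothing: soften hard one-hot targets so the trained
    # classifier assigns less over-confident scores to its inputs.
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

def inject_noise(x, sigma=0.01, rng=None):
    # Gaussian noise injection: augment training signals with small
    # additive noise to improve robustness to perturbations.
    rng = np.random.default_rng(rng)
    return x + rng.normal(0.0, sigma, size=x.shape)

def classify_with_rejection(scores, threshold=0.9):
    # Neural rejection: accept the top class only if its score clears
    # the threshold; otherwise flag the input as likely adversarial.
    top = int(np.argmax(scores))
    return top if scores[top] >= threshold else -1  # -1 = reject
```

For example, a confident score vector such as `[0.05, 0.92, 0.03]` is accepted as class 1, while a flat, low-confidence vector such as `[0.40, 0.35, 0.25]` is rejected, which is the behavior adversarial inputs are expected to trigger once the classifier is trained with smoothed labels and noise augmentation.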



Related articles

Certified Defenses against Adversarial Examples

While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, ...


Ensembling as a Defense Against Adversarial Examples

Adversarial attacks on machine learning systems come in two main flavors. First, there are training-time attacks, which involve compromising the training data that the system is trained on. Unsurprisingly, machines can misclassify examples if they are trained on malicious data. Second, there are test-time attacks, which involve crafting an adversarial example, which a human would easily classify a...


Verifying Controllers Against Adversarial Examples with Bayesian Optimization

Recent successes in reinforcement learning have led to the development of complex controllers for real-world robots. As these robots are deployed in safety-critical applications and interact with humans, it becomes critical to ensure safety in order to avoid causing harm. A first step in this direction is to test the controllers in simulation. To be able to do this, we need ...


Machine vs Machine: Minimax-Optimal Defense Against Adversarial Examples

Recently, researchers have discovered that the state-of-the-art object classifiers can be fooled easily by small perturbations in the input unnoticeable to human eyes. It is known that an attacker can generate strong adversarial examples if she knows the classifier parameters. Conversely, a defender can robustify the classifier by retraining if she has the adversarial examples. The cat-and-mous...


Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples

Deep Neural Networks (DNNs) have achieved remarkable performance in a myriad of realistic applications. However, recent studies show that well-trained DNNs can be easily misled by adversarial examples (AE) – maliciously crafted inputs formed by introducing small and imperceptible perturbations. Existing mitigation solutions, such as adversarial training and defensive distillation, suffer from...



Journal

Journal title: IEEE Wireless Communications Letters

Year: 2021

ISSN: 2162-2337, 2162-2345

DOI: https://doi.org/10.1109/lwc.2021.3083099